
Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks


Abstract

Recently, error minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to build single-hidden-layer feed-forward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error on the training set. Other very similar methods that also construct SLFNs sequentially had been reported earlier, with the main difference that their hidden-layer weights are a subset of the data instead of being random. These approaches are referred to as support vector sequential feed-forward neural networks (SV-SFNNs), and they are a particular case of the sequential approximation with optimal coefficients and interacting frequencies (SAOCIF) method. In this paper, it is first shown that EM-ELMs can also be cast as a particular case of SAOCIF. In particular, EM-ELMs can easily be extended to test a number of random candidates at each step and select the best of them, as SAOCIF does. Moreover, it is demonstrated that the computational cost of obtaining the optimal output-layer weights in the originally proposed EM-ELMs can be reduced if their update is replaced by the one included in SAOCIF. Second, we present the results of an experimental study on 10 benchmark classification and 10 benchmark regression data sets, comparing EM-ELMs and SV-SFNNs under the same conditions for the two models. Although both models have the same (efficient) computational cost, a statistically significant improvement in the generalization performance of SV-SFNNs over EM-ELMs was found in 12 out of the 20 benchmark problems.
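The sequential construction scheme described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: it uses sigmoid hidden units, re-fits the output weights with a full least-squares solve (whereas the original EM-ELM papers update them incrementally via the pseudoinverse), and the function name `build_slfn` and its parameters are hypothetical. Setting `n_candidates=1` recovers the plain EM-ELM scheme of adding a single random node per step; larger values correspond to the SAOCIF-style candidate selection mentioned above.

```python
import numpy as np

def build_slfn(X, y, max_nodes=20, n_candidates=10, rng=None):
    """Greedily grow a single-hidden-layer feed-forward network (SLFN).

    At each step, n_candidates random hidden nodes are evaluated and the
    one giving the lowest sum-of-squares training error is kept
    (candidate selection in the spirit of SAOCIF). Output weights are
    re-fit by least squares for clarity; EM-ELMs update them
    incrementally instead.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    H = np.empty((n, 0))      # hidden-layer output matrix, one column per node
    params = []               # (input weights, bias) of each accepted node
    beta = np.zeros((0,))     # output-layer weights

    for _ in range(max_nodes):
        best = None
        for _ in range(n_candidates):
            # Draw a random candidate hidden node.
            w = rng.standard_normal(d)
            b = rng.standard_normal()
            h = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
            # Optimal output weights for the enlarged hidden layer.
            H_try = np.column_stack([H, h])
            beta_try, *_ = np.linalg.lstsq(H_try, y, rcond=None)
            err = np.sum((H_try @ beta_try - y) ** 2)
            if best is None or err < best[0]:
                best = (err, w, b, h, beta_try)
        # Keep the best candidate of this step.
        _, w, b, h, beta = best
        H = np.column_stack([H, h])
        params.append((w, b))
    return params, beta
```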
